
    Hero or Villain: The New Jersey Consumer Fraud Act


    Within-socket Myoelectric Prediction of Continuous Ankle Kinematics for Control of a Powered Transtibial Prosthesis

    Objective. Powered robotic prostheses create a need for natural-feeling user interfaces and robust control schemes. Here, we examined the ability of a nonlinear autoregressive model to continuously map the kinematics of a transtibial prosthesis and electromyographic (EMG) activity recorded within socket to the future estimates of the prosthetic ankle angle in three transtibial amputees. Approach. Model performance was examined across subjects during level treadmill ambulation as a function of the size of the EMG sampling window and the temporal 'prediction' interval between the EMG/kinematic input and the model's estimate of future ankle angle to characterize the trade-off between model error, sampling window and prediction interval. Main results. Across subjects, deviations in the estimated ankle angle from the actual movement were robust to variations in the EMG sampling window and increased systematically with prediction interval. For prediction intervals up to 150 ms, the average error in the model estimate of ankle angle across the gait cycle was less than 6°. EMG contributions to the model prediction varied across subjects but were consistently localized to the transitions to/from single to double limb support and captured variations from the typical ankle kinematics during level walking. Significance. The use of an autoregressive modeling approach to continuously predict joint kinematics using natural residual muscle activity provides opportunities for direct (transparent) control of a prosthetic joint by the user. The model's predictive capability could prove particularly useful for overcoming delays in signal processing and actuation of the prosthesis, providing a more biomimetic ankle response.
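    The kind of mapping described above — a sliding window of EMG samples plus past ankle angles used to predict the ankle angle some interval ahead — can be sketched as follows. Synthetic signals and a plain least-squares fit stand in for the paper's nonlinear autoregressive model; the window and horizon values and all variable names are illustrative, not the authors' actual parameters.

    ```python
    import numpy as np

    # Synthetic stand-ins for gait data: a periodic ankle angle (degrees)
    # and a noisy EMG envelope roughly phase-locked to the gait cycle.
    rng = np.random.default_rng(0)
    T = 2000
    t = np.arange(T)
    angle = 15 * np.sin(2 * np.pi * t / 100)
    emg = np.abs(np.sin(2 * np.pi * t / 100 + 0.5)) + 0.05 * rng.standard_normal(T)

    window = 20    # EMG/kinematic sampling window, in samples (illustrative)
    horizon = 15   # 'prediction' interval, in samples ahead (illustrative)

    # Build lagged feature rows: [EMG window, past ankle angles] -> future angle.
    rows, targets = [], []
    for i in range(window, T - horizon):
        rows.append(np.concatenate([emg[i - window:i], angle[i - window:i]]))
        targets.append(angle[i + horizon])
    X = np.array(rows)
    y = np.array(targets)

    # Plain least-squares fit (a linear stand-in for the nonlinear model).
    Xb = np.c_[X, np.ones(len(X))]
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    pred = Xb @ w
    rmse = np.sqrt(np.mean((pred - y) ** 2))
    print(f"RMSE at {horizon}-sample prediction horizon: {rmse:.3f} deg")
    ```

    The trade-off the abstract characterizes can be explored by sweeping `window` and `horizon` and recording the resulting error at each setting.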

    An imPRESsive mimic

    Received for publication April 22, 2009; revision received June 18, 2009; and accepted June 19, 2009. Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/78628/1/j.1537-2995.2009.02362.x.pd

    The effect of hot dense hydrogen and argon in a ballistic compressor on the structure and composition of pure iron

    An experimental study of pure iron foil exposed to a hot, dense hydrogen and argon gas mixture in a ballistic compressor yielded evidence of structural and compositional changes of the metal due to the presence of the hydrogen gas. Three iron foils have been compared: one of unexposed pure iron, another of pure iron exposed to a mixture of hydrogen and argon gas, and the third of pure iron exposed to argon alone. Exposure to these high-temperature, high-pressure gases took place in a ballistic compressor. Line formations were found on the surface of the iron foil exposed to both hydrogen and argon. These appeared as 'V'- or 'W'-shaped configurations, giving the appearance of a serrated edge. Such lines were not found on the other two iron foils. Characteristic peaks of energy dispersive x-ray spectra yield different surface concentrations of oxygen when each iron foil sample is compared. This concentration is much less for the iron foil exposed to both hydrogen and argon gases than for the other two samples. A larger carbon peak was also found for the former sample compared with the latter two. A shift in the 200 x-ray diffraction peak by one degree 2θ was observed for the sample exposed to hydrogen and argon, and a 'triple' peak was observed for the 310 plane for the same iron sample.

    Learning from Monte Carlo Rollouts with Opponent Models for Playing Tron

    This paper describes a novel reinforcement learning system for learning to play the game of Tron. The system combines Q-learning, multi-layer perceptrons, vision grids, opponent modelling, and Monte Carlo rollouts in a novel way. By learning an opponent model, Monte Carlo rollouts can be effectively applied to generate state trajectories for all possible actions, from which improved action estimates can be computed. This allows experience replay to be extended so that the state-action values of all actions in a given game state are updated simultaneously. The results show that experience replay that updates the Q-values of all actions simultaneously strongly outperforms conventional experience replay, which only updates the Q-value of the performed action. The results also show that using short or long rollout horizons during training leads to similarly good performance against two fixed opponents.
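    The key idea — using a learned opponent model to roll out every action from a state, so all Q-values for that state can be updated at once rather than only the performed action's — can be sketched as follows. A tabular Q-function and a toy stochastic rollout stand in for the paper's MLP, vision grids, and actual Tron simulation; all names and return values here are illustrative.

    ```python
    import random

    ACTIONS = ["left", "straight", "right"]
    ALPHA = 0.1  # learning rate (illustrative)

    def rollout_return(state, action, opponent_model, rng):
        # Toy stand-in for a Monte Carlo rollout that simulates both the
        # agent and a learned opponent model to the horizon and returns the
        # observed return. Here: a fixed mean per action plus noise.
        base = {"left": 0.2, "straight": 0.6, "right": 0.4}[action]
        return base + 0.1 * (rng.random() - 0.5)

    def update_all_actions(Q, state, opponent_model, rng):
        # Conventional experience replay would update only the performed
        # action; with an opponent model, a rollout can be generated for
        # every action, so all Q-values of this state are updated at once.
        for a in ACTIONS:
            G = rollout_return(state, a, opponent_model, rng)
            old = Q.get((state, a), 0.0)
            Q[(state, a)] = old + ALPHA * (G - old)

    rng = random.Random(0)
    Q = {}
    for _ in range(500):
        update_all_actions(Q, "s0", opponent_model=None, rng=rng)

    best = max(ACTIONS, key=lambda a: Q[("s0", a)])
    print(best)
    ```

    With repeated sweeps, each Q-value converges toward its action's expected rollout return, which is the effect the paper attributes to the all-actions replay variant.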